motiva[f84,jmc] What are human motivations
It is common to the point of being a truism to account for
human actions as arising from maximizing utility functions.
Economists and game theorists even try to get by with a single
utility function, suitably discounted with respect to the
future. Rawls refers to such a theory.
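As a concrete rendering of this theory (the notation here is
mine; no formula is stated in the text), the agent is supposed
to act so as to maximize

	$$U = \sum_{t=0}^\infty \delta^t u(x_t), \qquad 0 < \delta < 1,$$

where $u$ measures the utility of the agent's situation $x_t$
at time $t$ and the discount factor $\delta$ weights the future
less than the present.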
	This view seems obviously mistaken. By obviously, I mean
that I hope it will be obvious once pointed out. Since many smart
people believe the theory, it can't be obvious without being
pointed out.
	My opinion is that human action is motivated by a wide
variety of goals, many of which are peculiar to the individual
and his current state. These goals have causes, and people
mistakenly imagine that the causes make them subgoals of more
primitive primary goals. They aren't subgoals, because they
acquire independence from the primary goals that gave rise to
them and will often be pursued at the expense of those
``primary'' goals.
	Admitting this, one might claim that it represents
a malfunction, an error. If it is pointed out to a person that
he is pursuing a secondary goal at the expense of the primary
goal that triggered it, perhaps he should give up the secondary
goal. Sometimes people do, but often they will claim that you
have misrepresented their basic motivation.
	It may be that people have second-order goals, or
meta-goals. Meta-goals include goals of having goals. A person
may consciously acquire ``a purpose in life'' or something similar
of a less comprehensive nature --- or many of them. Once acquired,
such a derived goal need not be subordinate. It may even supersede
the goal of survival and usually supersedes the goal of forming goals.
We need not regard such behavior as
pathological. There is no reason to suppose that primary individual
goals have a higher status.
Perhaps some babble about evolution will make this easier
to take. The ability of a human community to compete with others
depends on its members being able to sustain complex goals for
many years. Evolution isn't intelligent and cannot select for
particular goals, which are very far from being genetically
describable. Therefore, the ability to hold arbitrary goals
has survival value for the group. It allows goals to evolve
socially.
Points to integrate
Determining behavior by maximizing a utility function has the virtue
that it always decides what to do.
People and even dogs often experience internal conflicts of purpose. I want to
eat, but I know I'm too fat. These conflicts are not just
computational difficulties in determining what course of action
maximizes a utility function. Certainly they don't feel like
computational difficulties.
When we remember that we are evolved from animals, what is going on
becomes clearer. Animals don't maximize utility functions; they respond
to drives --- hunger, sex, hostility, the drive to build a nest.
Each drive has evolved to meet particular survival and propagation
requirements. The animal functions best when only one drive is
operative at a time. A dog not in heat has no idea of what sex is
all about and doesn't plan for future sexual needs.
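	A minimal sketch may make the contrast concrete: maximizing
a utility function always decides, while near-tied drives
demanding incompatible actions produce exactly the felt conflict
described above. Everything below (the names, the numbers, the
tolerance) is a hypothetical illustration of mine, not a model
proposed in the text.

    # Hypothetical illustration: drives vs. a utility maximizer.

    def strongest_drive(drives):
        # drives: dict mapping a drive's name to its current strength
        return max(drives, key=drives.get)

    def conflicted(drives, tolerance=0.1):
        # Two drives within `tolerance` of each other, pulling toward
        # incompatible actions, leave no clear winner.
        ranked = sorted(drives.values(), reverse=True)
        return len(ranked) > 1 and ranked[0] - ranked[1] < tolerance

    # A utility maximizer always returns an action, however close:
    utility = {"eat": 0.51, "abstain": 0.49}
    choice = max(utility, key=utility.get)    # "eat"

    # The drive picture instead registers a conflict:
    drives = {"hunger": 0.51, "desire_to_be_slim": 0.49}
    strongest_drive(drives)    # "hunger"
    conflicted(drives)         # True: I want to eat, but I'm too fat.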
Like animals, humans have drives. While drives can sometimes be
represented by utility functions, most likely this can't always
be done, and anyway it isn't an illuminating way of thinking about
them.
Humans are more complex than animals. We do plan for satisfaction
of drives expected in the future. Moreover, we have goals and
drives to satisfy goals. (I am not at present prepared to describe
comprehensively the relation between goals and drives to satisfy
them. An example of the distinction is the following. I have
promised to telephone someone and arrange a meeting, and this
becomes a goal. However, I have no drive to arrange meetings,
and I may be quite reluctant to make the call.)
Humans have meta-goals, i.e., goals about goals. A smoker wants
to get rid of his craving for cigarettes. Odysseus knew that if
he heard the sirens he would want to jump into the sea, and he
didn't want to do that. Therefore, he had his men tie him to
the mast and plug their ears.
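	The Odysseus story suggests reading a meta-goal as an action
taken now upon one's own future choices. Continuing the
hypothetical sketch above (this rendering is mine, not the
text's):

    # Precommitment as a meta-goal: foreseeing that a future drive
    # will overwhelm him, the agent removes the dangerous action
    # from his own future repertoire.

    def precommit(future_actions, dangerous):
        # Tied to the mast, "jump into the sea" is simply no longer
        # available when the sirens' song creates the urge.
        return [a for a in future_actions if a != dangerous]

    actions_at_sirens = ["sail on", "jump into the sea"]
    actions_at_sirens = precommit(actions_at_sirens, "jump into the sea")
    # Whatever the urge, the later choice is made over what remains:
    actions_at_sirens    # ["sail on"]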
A very important meta-goal is the goal of being rational.
Internal conflicts are unpleasant. A rational person has
a utility function or some other decision procedure covering
all situations, or as many as possible. Insofar as a person
can train himself to be rational, he can avoid conflicts.
Rationality in the sense indicated above doesn't say what the
goals are. A utility function could incorporate altruism or
selfishness, morality or immorality.
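For instance (an illustration of mine, not a formula from the
text), a utility function of the form

	$$U = u_{\rm self} + \alpha\, u_{\rm others}$$

is selfish at $\alpha = 0$, altruistic for $\alpha > 0$, and
spiteful for $\alpha < 0$; rationality in the sense above fixes
none of this.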
People often have the goal of being moral. They would like
to be able to behave according to rules given in advance without
future conflicts.
When we consider artificial intelligence, it seems that the
problem should simplify. An AI might have computational
difficulties, but it shouldn't have conflicts.